Results 1 - 20 of 123
1.
Comput Math Methods Med ; 2023: 7091301, 2023.
Article in English | MEDLINE | ID: covidwho-20243039

ABSTRACT

Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and supporting clinicians in improving diagnostic accuracy. Deep learning has proven to be the most effective method for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey by providing a synopsis of research works in medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and generative adversarial networks that aid in improving convolutional networks' performance. Finally, to ease direct comparison, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.


Subject(s)
COVID-19 , Deep Learning , Child , Humans , Diagnostic Imaging/methods , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
2.
Biomed Res Int ; 2023: 1632992, 2023.
Article in English | MEDLINE | ID: covidwho-2323857

ABSTRACT

Artificial intelligence (AI) researchers and clinicians have reported AI systems that accurately detect COVID-19 in chest images. However, the robustness of these models remains unclear for the segmentation of images with nonuniform density distributions or multiphase targets. The most representative such model is the Chan-Vese (CV) image segmentation model. In this paper, we demonstrate that the recent level set (LV) model has excellent performance in detecting target characteristics in medical images, relying on a filtering variational method based on global pathological image features. We observe that the capability of the filtering variational method to obtain image feature quality is better than that of other LV models. This research reveals a far-reaching problem in medical-imaging AI knowledge detection. In addition, the analysis of experimental results shows that the algorithm proposed in this paper is effective at detecting lung region feature information in COVID-19 images and also demonstrates good adaptability in processing different images. These findings suggest that the proposed LV method can serve as an effective clinically adjunctive method within machine-learning healthcare models.
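The Chan-Vese model referenced above evolves a contour so that the two regions it separates are each well explained by their mean intensity. The following is a drastically simplified, curvature-free sketch of that idea in NumPy (the function name, initialization, and step rule are our illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def chan_vese_simplified(image, n_iter=200, dt=0.5):
    """Two-phase, piecewise-constant Chan-Vese sketch (no curvature term).

    A level-set function phi is evolved so that the region phi > 0 is
    best explained by its mean intensity c1 and the rest by c2.
    """
    h, w = image.shape
    y, x = np.mgrid[:h, :w]
    # Initialize phi as a centered circle, positive inside.
    phi = min(h, w) / 4 - np.sqrt((x - w / 2) ** 2 + (y - h / 2) ** 2)

    for _ in range(n_iter):
        inside = phi > 0
        if inside.all() or not inside.any():
            break
        c1 = image[inside].mean()   # mean intensity inside the contour
        c2 = image[~inside].mean()  # mean intensity outside
        # Pixels closer to c1 push phi up; pixels closer to c2 push it down.
        force = (image - c2) ** 2 - (image - c1) ** 2
        phi += dt * force / (np.abs(force).max() + 1e-8)
    return phi > 0
```

On an image with a bright object on a dark background, the positive region of phi contracts or grows until it covers the bright object; the full CV model adds a curvature (length) penalty that this sketch omits.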


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/diagnostic imaging , Diagnostic Imaging , Algorithms , Models, Theoretical , Image Processing, Computer-Assisted/methods
3.
PLoS One ; 18(5): e0285211, 2023.
Article in English | MEDLINE | ID: covidwho-2320346

ABSTRACT

Aerial photography is a long-range, non-contact method of target detection technology that enables qualitative or quantitative analysis of the target. However, aerial photography images generally have certain chromatic aberration and color distortion. Therefore, effective segmentation of aerial images can further enhance the feature information and reduce the computational difficulty for subsequent image processing. In this paper, we propose an improved version of Golden Jackal Optimization, dubbed Helper Mechanism Based Golden Jackal Optimization (HGJO), to apply multilevel threshold segmentation to aerial images. The proposed method uses opposition-based learning to boost population diversity. A new approach to calculating the prey escape energy is proposed to improve the convergence speed of the algorithm. In addition, the Cauchy distribution is introduced to adjust the original update scheme to enhance the exploration capability of the algorithm. Finally, a novel "helper mechanism" is designed to improve performance in escaping local optima. To demonstrate the effectiveness of the proposed algorithm, we use the CEC2022 benchmark function test suite to perform comparison experiments. HGJO is compared with the original GJO and five classical meta-heuristics. The experimental results show that HGJO achieves competitive results on the benchmark test set. Finally, all of the algorithms are applied to variable threshold segmentation of aerial images, and the results show that the aerial photography images segmented by HGJO beat the others. Notably, the source code of HGJO is publicly available at https://github.com/Vang-z/HGJO.
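Multilevel threshold segmentation of the kind HGJO optimizes typically maximizes Otsu's between-class variance over candidate threshold tuples. A minimal sketch for the two-threshold (three-class) case, using exhaustive search as a deterministic stand-in for the metaheuristic (function names and bin count are our assumptions):

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu objective: weighted variance of class means around the global mean."""
    levels = np.arange(len(hist))
    p = hist / hist.sum()
    mu_total = (levels * p).sum()
    edges = [0, *thresholds, len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def multilevel_otsu(image, n_thresholds=2, n_bins=64):
    """Exhaustively pick the thresholds that maximize between-class variance."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    best = max(combinations(range(1, n_bins), n_thresholds),
               key=lambda t: between_class_variance(hist, t))
    return sorted(best)
```

Exhaustive search is only feasible for small numbers of thresholds and bins; metaheuristics such as GJO/HGJO replace it when the search space grows.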


Subject(s)
Algorithms , Jackals , Animals , Image Processing, Computer-Assisted/methods , Software , Photography
4.
Comput Biol Med ; 157: 106726, 2023 05.
Article in English | MEDLINE | ID: covidwho-2309093

ABSTRACT

Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven to be quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition due to the subtle variation between lesions or the wide range of lesions involved. Several studies have recently explored deep learning-based algorithms to solve the multiple-lesion recognition challenge. This paper includes an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, including multiple-lesion recognition in diverse body areas and recognition of whole-body multiple diseases. We discuss the challenges that still persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline existing problems and potential future research areas, with the hope that this review will help researchers develop future approaches that will drive additional advances.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Algorithms
5.
Comput Biol Med ; 156: 106718, 2023 04.
Article in English | MEDLINE | ID: covidwho-2308968

ABSTRACT

Cardiovascular diseases (CVD), as the leading cause of death in the world, pose a serious threat to human health. The segmentation of the carotid lumen-intima interface (LII) and media-adventitia interface (MAI) is a prerequisite for measuring intima-media thickness (IMT), which is of great significance for early screening and prevention of CVD. Despite recent advances, existing methods still fail to incorporate task-related clinical domain knowledge and require complex post-processing steps to obtain fine contours of the LII and MAI. In this paper, a nested attention-guided deep learning model (named NAG-Net) is proposed for accurate segmentation of the LII and MAI. The NAG-Net consists of two nested sub-networks, the Intima-Media Region Segmentation Network (IMRSN) and the LII and MAI Segmentation Network (LII-MAISN). It innovatively incorporates task-related clinical domain knowledge through the visual attention map generated by IMRSN, enabling LII-MAISN to focus more on the clinician's visual focus region under the same task during segmentation. Moreover, fine contours of the LII and MAI can be obtained directly from the segmentation results through simple refinement, without complicated post-processing steps. To further improve the feature extraction ability of the model and reduce the impact of data scarcity, a transfer learning strategy is also adopted, applying pretrained VGG-16 weights. In addition, a channel attention-based encoder feature fusion block (EFFB-ATT) is specially designed to achieve efficient representation of useful features extracted by the two parallel encoders in LII-MAISN. Extensive experimental results have demonstrated that our proposed NAG-Net outperformed other state-of-the-art methods and achieved the highest performance on all evaluation metrics.


Subject(s)
Cardiovascular Diseases , Carotid Intima-Media Thickness , Humans , Adventitia/diagnostic imaging , Carotid Arteries/diagnostic imaging , Tunica Intima/diagnostic imaging , Image Processing, Computer-Assisted/methods
6.
Med Image Anal ; 86: 102787, 2023 05.
Article in English | MEDLINE | ID: covidwho-2308518

ABSTRACT

X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as full-dose images (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which follows a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages: coarse and fine. In both stages, the generator generates estimated F-CT (F-PET) images as similar to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores the inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Tomography, X-Ray Computed/methods , Attention
7.
Comput Biol Med ; 158: 106892, 2023 05.
Article in English | MEDLINE | ID: covidwho-2293243

ABSTRACT

Vessel segmentation is significant for characterizing vascular diseases and has received wide attention from researchers. Common vessel segmentation methods are mainly based on convolutional neural networks (CNNs), which have excellent feature learning capabilities. Because the learning direction cannot be predicted, CNNs rely on wide channels or considerable depth to obtain sufficient features, which may introduce redundant parameters. Drawing on the strength of Gabor filters in vessel enhancement, we built a Gabor convolution kernel and designed its optimization. Unlike traditional filter usage and common modulation, its parameters are automatically updated using gradients in backpropagation. Since the structural shape of Gabor convolution kernels is the same as that of regular convolution kernels, they can be integrated into any CNN architecture. We built a Gabor ConvNet using Gabor convolution kernels and tested it on three vessel datasets. It scored 85.06%, 70.52%, and 67.11%, respectively, ranking first on all three datasets. Results show that our method outperforms advanced models in vessel segmentation. Ablations also showed that the Gabor kernel has better vessel extraction ability than the regular convolution kernel.
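A Gabor convolution kernel of the kind described above is generated from a handful of parameters (orientation, wavelength, envelope width); in the paper those parameters are updated by backpropagation, but a fixed-parameter sketch conveys the construction (parameter names and defaults are generic, not the authors'):

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times an oriented cosine wave.

    theta: orientation, lam: wavelength, sigma: envelope width,
    gamma: spatial aspect ratio, psi: phase offset.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier
```

Because the kernel is an ordinary 2-D array, it can be dropped into any convolution layer; making (theta, lam, sigma, gamma, psi) trainable is what distinguishes the paper's kernels from fixed Gabor filter banks.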


Subject(s)
Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 157: 106683, 2023 05.
Article in English | MEDLINE | ID: covidwho-2264789

ABSTRACT

Thoracic disease, like many other diseases, can lead to complications. Existing multi-label medical image learning problems typically include rich pathological information, such as images, attributes, and labels, which are crucial for supplementary clinical diagnosis. However, the majority of contemporary efforts focus exclusively on regression from input to binary labels, ignoring the relationship between visual features and the semantic vectors of labels. In addition, there is an imbalance in the amount of data between diseases, which frequently causes intelligent diagnostic systems to make erroneous disease predictions. Therefore, we aim to improve the accuracy of multi-label classification of chest X-ray images. The ChestX-ray14 dataset was used as the multi-label dataset for the experiments in this study. By fine-tuning the ConvNeXt network, we obtained visual vectors, which we combined with semantic vectors encoded by BioBERT to map the two different forms of features into a common metric space, making the semantic vectors the prototype of each class in that space. The metric relationship between images and labels is then considered at the image level and the disease category level, respectively, and a new dual-weighted metric loss function is proposed. Finally, the average AUC score achieved in the experiment reached 0.826, and our model outperformed the comparison models.
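The prototype idea above, with semantic label embeddings acting as class prototypes in a shared metric space, amounts to scoring an image's visual vector against each prototype. A toy sketch using cosine similarity (the paper learns projections into the shared space first; this function and its names are illustrative only):

```python
import numpy as np

def prototype_scores(visual_vec, prototypes):
    """Multi-label scores: cosine similarity of one visual vector
    against each class prototype (rows of `prototypes`)."""
    v = visual_vec / (np.linalg.norm(visual_vec) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return p @ v
```

Each score can then be thresholded independently for multi-label prediction, which is where a metric loss weighting images and disease categories differently would act during training.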


Subject(s)
Deep Learning , X-Rays , Image Processing, Computer-Assisted/methods , Thorax , Semantics
9.
IEEE Trans Med Imaging ; 41(12): 3812-3823, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2288807

ABSTRACT

The accurate segmentation of multiple types of lesions from adjacent tissues in medical images is significant in clinical practice. Convolutional neural networks (CNNs) based on the coarse-to-fine strategy have been widely used in this field. However, multi-lesion segmentation remains challenging due to the uncertainty in size, contrast, and high interclass similarity of tissues. In addition, the commonly adopted cascaded strategy is rather demanding in terms of hardware, which limits the potential for clinical deployment. To address the problems above, we propose a novel Prior Attention Network (PANet) that follows the coarse-to-fine strategy to perform multi-lesion segmentation in medical images. The proposed network achieves the two steps of segmentation in a single network by inserting a lesion-related spatial attention mechanism into the network. Further, we propose an intermediate supervision strategy for generating lesion-related attention to acquire the regions of interest (ROIs), which accelerates convergence and markedly improves segmentation performance. We have investigated the proposed segmentation framework in two applications: 2D segmentation of multiple lung infections in lung CT slices and 3D segmentation of multiple lesions in brain MRIs. Experimental results show that in both 2D and 3D segmentation tasks our proposed network achieves better performance with less computational cost compared with cascaded networks. The proposed network can be regarded as a universal solution to multi-lesion segmentation in both 2D and 3D tasks. The source code is available at https://github.com/hsiangyuzhao/PANet.


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
10.
Methods ; 205: 200-209, 2022 09.
Article in English | MEDLINE | ID: covidwho-2255505

ABSTRACT

BACKGROUND: Lesion segmentation is a critical step in medical image analysis, and methods to identify pathology without time-intensive manual labeling of data are of utmost importance during a pandemic and in resource-constrained healthcare settings. Here, we describe a method for fully automated segmentation and quantification of pathological COVID-19 lung tissue on chest Computed Tomography (CT) scans without the need for manually segmented training data. METHODS: We trained a cycle-consistent generative adversarial network (CycleGAN) to convert images of COVID-19 scans into their generated healthy equivalents. Subtraction of the generated healthy images from their corresponding original CT scans yielded maps of pathological tissue, without background lung parenchyma, fissures, airways, or vessels. We then used these maps to construct three-dimensional lesion segmentations. Using a validation dataset, Dice scores were computed for our lesion segmentations and other published segmentation networks using ground truth segmentations reviewed by radiologists. RESULTS: The COVID-to-Healthy generator eliminated high Hounsfield unit (HU) voxels within pulmonary lesions and replaced them with lower HU voxels. The generator did not distort normal anatomy such as vessels, airways, or fissures. The generated healthy images had higher gas content (2.45 ± 0.93 vs 3.01 ± 0.84 L, P < 0.001) and lower tissue density (1.27 ± 0.40 vs 0.73 ± 0.29 kg, P < 0.001) than their corresponding original COVID-19 images, and they were not significantly different from those of the healthy images (P < 0.001). Using the validation dataset, lesion segmentations scored an average Dice score of 55.9, comparable to other weakly supervised networks that do require manual segmentations. CONCLUSION: Our CycleGAN model successfully segmented pulmonary lesions in mild and severe COVID-19 cases. Our model's performance was comparable to that of other published models; however, our model is unique in its ability to segment lesions without the need for manual segmentations.
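The subtraction step in the method above can be illustrated independently of the GAN itself: given a COVID-19 scan and its generated healthy counterpart, candidate pathology is wherever the original is denser (higher HU) than the generated image. A toy sketch (the threshold value and all names are illustrative assumptions, not the paper's):

```python
import numpy as np

def lesion_map(original_hu, generated_healthy_hu, threshold_hu=100.0):
    """Subtract the generated healthy scan from the original CT.

    Positive differences mark voxels where the original is denser
    (higher HU) than its healthy counterpart, i.e. candidate pathology.
    """
    diff = original_hu - generated_healthy_hu
    pathology = np.clip(diff, 0, None)   # keep only added density
    mask = pathology > threshold_hu      # binarize into a lesion mask
    return pathology, mask
```

Because the generator leaves vessels, airways, and fissures unchanged, those structures cancel in the subtraction, which is what lets the map isolate pathological tissue.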


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Tomography, X-Ray Computed/methods
11.
Comput Biol Med ; 155: 106698, 2023 03.
Article in English | MEDLINE | ID: covidwho-2264677

ABSTRACT

The COVID-19 pandemic has extremely threatened human health, and automated algorithms are needed to segment infected regions in the lung using computed tomography (CT). Although several deep convolutional neural networks (DCNNs) have been proposed for this purpose, their performance on this task is limited by their restricted local receptive field and deficient global reasoning ability. To address these issues, we propose a segmentation network with a novel pixel-wise sparse graph reasoning (PSGR) module for the segmentation of COVID-19 infected regions in CT images. The PSGR module, which is inserted between the encoder and decoder of the network, improves the modeling of global contextual information. In the PSGR module, a graph is first constructed by projecting each pixel onto a node based on the features produced by the encoder. Then, we convert the graph into a sparsely connected one by keeping the K strongest connections to each uncertainly segmented pixel. Finally, global reasoning is performed on the sparsely connected graph. Our segmentation network was evaluated on three publicly available datasets and compared with a variety of widely used segmentation models. Our results demonstrate that (1) the proposed PSGR module can capture long-range dependencies effectively and (2) the segmentation model equipped with this PSGR module can accurately segment COVID-19 infected regions in CT images and outperform all other competing models.
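The sparsification step in the PSGR module, keeping only the K strongest connections per node, can be sketched on a plain affinity matrix. The similarity measure and names here are illustrative, not the paper's exact formulation:

```python
import numpy as np

def topk_sparse_graph(features, k=3):
    """Build a dense affinity graph from per-node features, then keep only
    the k strongest connections per node (zeroing the rest)."""
    # Cosine-similarity affinity between all node pairs.
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    affinity = norm @ norm.T
    np.fill_diagonal(affinity, -np.inf)   # exclude self-loops from the top-k
    # Indices of the k largest affinities per row.
    keep = np.argsort(affinity, axis=1)[:, -k:]
    sparse = np.zeros_like(affinity)
    rows = np.arange(affinity.shape[0])[:, None]
    sparse[rows, keep] = affinity[rows, keep]
    return sparse
```

Reasoning over the sparse graph costs O(N·k) messages per step instead of O(N²), which is the practical motivation for dropping weak connections.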


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Pandemics , Neural Networks, Computer , Tomography, X-Ray Computed/methods
12.
Cytometry A ; 103(6): 492-499, 2023 06.
Article in English | MEDLINE | ID: covidwho-2246697

ABSTRACT

Microvascular thrombosis is a typical symptom of COVID-19 and shows similarities to other forms of thrombosis. Using a microfluidic imaging flow cytometer, we measured the blood of 181 COVID-19 samples and 101 non-COVID-19 thrombosis samples, resulting in a total of 6.3 million bright-field images. We trained a convolutional neural network to distinguish single platelets, platelet aggregates, and white blood cells, and performed classical image analysis for each subpopulation individually. Based on derived single-cell features for each population, we trained machine learning models for classification between COVID-19 and non-COVID-19 thrombosis, achieving a patient testing accuracy of 75%. This result indicates that platelet formation differs between COVID-19 and non-COVID-19 thrombosis. All analysis steps were optimized for efficiency and implemented in an easy-to-use plugin for the image viewer napari, allowing the entire analysis to be performed within seconds on mid-range computers, which could be used for real-time diagnosis.


Subject(s)
COVID-19 , Thrombosis , Humans , Blood Platelets , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
13.
Med Image Anal ; 86: 102771, 2023 05.
Article in English | MEDLINE | ID: covidwho-2246448

ABSTRACT

Automatic lesion segmentation on thoracic CT enables rapid quantitative analysis of lung involvement in COVID-19 infections. However, obtaining a large amount of voxel-level annotations for training segmentation networks is prohibitively expensive. Therefore, we propose a weakly-supervised segmentation method based on dense regression activation maps (dRAMs). Most weakly-supervised segmentation approaches exploit class activation maps (CAMs) to localize objects. However, because CAMs were trained for classification, they do not align precisely with the object segmentations. Instead, we produce high-resolution activation maps using dense features from a segmentation network that was trained to estimate a per-lobe lesion percentage. In this way, the network can exploit knowledge regarding the required lesion volume. In addition, we propose an attention neural network module to refine dRAMs, optimized together with the main regression task. We evaluated our algorithm on 90 subjects. Results show our method achieved 70.2% Dice coefficient, substantially outperforming the CAM-based baseline at 48.6%. We published our source code at https://github.com/DIAGNijmegen/bodyct-dram.


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Algorithms , Image Processing, Computer-Assisted/methods
15.
Med Image Anal ; 84: 102726, 2023 02.
Article in English | MEDLINE | ID: covidwho-2159543

ABSTRACT

Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is exploited to compute the final segmentation results, and the hidden representations of the deep learned features have not been well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map based on deep features. We measure the segmentation abilities of the features by computing the Dice coefficient between the feature segmentation map and the ground truth, named the segmentation ability score (SA score for short). The SA score can quantify the segmentation abilities of deep features in different layers and units to aid understanding of deep neural networks for segmentation. In addition, our method can provide a mean SA score, which gives a performance estimate of the output on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly on input images to further understand the segmentation ability of each input image. Results are presented on segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormalities in CT images, the prostate in abdominal MRI, and pancreatic masses in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available at: https://github.com/shengfly/ProtoSeg.
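The SA score above reduces to a Dice coefficient between a binarized feature map and the ground truth. A minimal sketch (the global-mean thresholding rule here is our assumption; the paper derives the binary map from prototypes):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def sa_score(feature_map, ground_truth):
    """Segmentation-ability score: Dice of the binarized feature map."""
    binary = feature_map > feature_map.mean()   # simple global threshold
    return dice(binary, ground_truth)
```

Computing this score per layer or per unit is what lets the method rank which deep features already "see" the segmentation target.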


Subject(s)
COVID-19 , Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
16.
Sensors (Basel) ; 22(24)2022 Dec 08.
Article in English | MEDLINE | ID: covidwho-2155244

ABSTRACT

We propose a new generative model named adaptive cycle-consistent generative adversarial network, or Ad CycleGAN, to perform image translation between normal and COVID-19 positive chest X-ray images. An independent pre-trained criterion is added to the conventional Cycle GAN architecture to exert adaptive control on image translation. The performance of Ad CycleGAN is compared with the Cycle GAN without the external criterion. The quality of the synthetic images is evaluated by quantitative metrics including Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Universal Image Quality Index (UIQI), visual information fidelity (VIF), Fréchet Inception Distance (FID), and translation accuracy. The experimental results indicate that the synthetic images generated either by the Cycle GAN or by the Ad CycleGAN have lower MSE and RMSE, and higher scores in PSNR, UIQI, and VIF in homogeneous image translation (i.e., Y → Y) compared to the heterogeneous image translation process (i.e., X → Y). The synthetic images produced by Ad CycleGAN through heterogeneous image translation have a significantly higher FID score compared to Cycle GAN (p < 0.01). The image translation accuracy of Ad CycleGAN is higher than that of Cycle GAN when normal images are converted to COVID-19 positive images (p < 0.01). Therefore, we conclude that the Ad CycleGAN with the independent criterion can improve the accuracy of GAN image translation. The new architecture offers more control over image synthesis and can help address the common class imbalance issue in machine learning methods and artificial intelligence applications with medical images.
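Two of the quality metrics used above, MSE and PSNR, are straightforward to compute; a sketch for 8-bit images follows (the remaining metrics, UIQI, VIF, and FID, require substantially more machinery):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / err)
```

Note the opposite conventions: lower MSE/RMSE and higher PSNR indicate better fidelity, whereas for FID lower is better.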


Subject(s)
Artificial Intelligence , COVID-19 , Humans , X-Rays , Image Processing, Computer-Assisted/methods , COVID-19/diagnostic imaging , Machine Learning
17.
Comput Intell Neurosci ; 2022: 4431817, 2022.
Article in English | MEDLINE | ID: covidwho-2088975

ABSTRACT

During the COVID-19 pandemic, a huge number of interstitial lung disease (ILD) lung images have been captured, and efficient segmentation techniques are needed to separate the anatomical structures and ILD patterns for disease and infection level identification. The effectiveness of disease classification directly depends on the accuracy of initial stages like preprocessing and segmentation. This paper proposes a hybrid segmentation algorithm designed for ILD images that takes advantage of superpixel and K-means clustering approaches. Segmented superpixel images capture irregular local spatial neighborhoods better, which helps improve the performance of K-means clustering-based ILD image segmentation. To overcome the limitations of multiclass membership, semiadaptive wavelet-based fusion is applied over selected K-means clusters. The performance of the proposed SPFKMC was compared with that of 3-class Fuzzy C-Means clustering (FCM) and K-means clustering in terms of accuracy, Jaccard similarity index (JSI), and Dice similarity coefficient (DSC). The SPFKMC algorithm gives an accuracy of 99.28%, DSC of 98.72%, and JSI of 97.87%. The proposed fused clustering gives better results than traditional K-means clustering segmentation with wavelet-based fused cluster results.
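The K-means stage of such a pipeline clusters pixels by intensity. A minimal Lloyd-iteration sketch on a grayscale image (no superpixels or wavelet fusion, which are the paper's additions; the quantile initialization is our simplification to keep the result deterministic):

```python
import numpy as np

def kmeans_segment(image, k=3, n_iter=20):
    """Cluster pixel intensities with k-means and return a label image.

    Centers are initialized at evenly spaced intensity quantiles so the
    result is deterministic.
    """
    pixels = image.reshape(-1, 1).astype(np.float64)
    centers = np.quantile(pixels, np.linspace(0, 1, k)).reshape(k, 1)
    for _ in range(n_iter):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        # Recompute each center as its cluster mean (skip empty clusters).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape), centers.ravel()
```

Running the same clustering on superpixel mean intensities instead of raw pixels is what gives the hybrid method its robustness to local noise.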


Subject(s)
COVID-19 , Lung Diseases, Interstitial , Humans , Fuzzy Logic , Pandemics , Algorithms , Lung Diseases, Interstitial/diagnostic imaging , Image Processing, Computer-Assisted/methods
18.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: covidwho-2071712

ABSTRACT

Research on face recognition with masked faces has become increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data should be acquired for training purposes. However, it is difficult to obtain masked face images for each human subject. To cope with this difficulty, this paper proposes a simple yet practical method to synthesize a realistic masked face for an unseen face image. For this, a cascade of two convolutional auto-encoders (CAEs) has been designed. The first CAE generates a pose-alike face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the first CAE is then fed into the second CAE, which extracts a segmentation map that localizes the mask region on the face. Using the segmentation map, the mask pattern can be fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction without any facial landmark detection or localization techniques. Extensive experiments with the GTAV Face database and the Labeled Faces in the Wild (LFW) database show that the two complementary generators can rapidly and accurately produce synthetic faces even for challenging input faces (e.g., a low-resolution face of 25 × 25 pixels with out-of-plane rotations).


Subject(s)
COVID-19 , Facial Recognition , Humans , Pandemics , Image Processing, Computer-Assisted/methods , Databases, Factual
19.
Comput Med Imaging Graph ; 102: 102127, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2061035

ABSTRACT

Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
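The re-weighting idea in (i) above, weighting a machine-annotated sample by how well its gradient direction agrees with that of expert-annotated data, can be sketched with plain vectors standing in for flattened gradients (clipping disagreeing samples to weight zero is our assumption, and all names are illustrative):

```python
import numpy as np

def gradient_agreement_weights(machine_grads, expert_grad):
    """Weight each machine-annotated sample by the cosine similarity of its
    gradient to the expert-data gradient; opposing samples get weight 0."""
    expert_unit = expert_grad / (np.linalg.norm(expert_grad) + 1e-8)
    norms = np.linalg.norm(machine_grads, axis=1, keepdims=True) + 1e-8
    cos = (machine_grads / norms) @ expert_unit
    return np.clip(cos, 0.0, None)   # descent directions opposing experts → 0
```

In the full method these weights would scale each machine-annotated sample's loss, so noisy-teacher annotations that would pull the model away from the expert gradient contribute little or nothing.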


Subject(s)
COVID-19 , Deep Learning , Humans , Imaging, Three-Dimensional/methods , Supervised Machine Learning , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
20.
IEEE Trans Image Process ; 31: 5893-5908, 2022.
Article in English | MEDLINE | ID: covidwho-2042822

ABSTRACT

Accurate image segmentation plays a crucial role in medical image analysis, yet it faces great challenges caused by various shapes, diverse sizes, and blurry boundaries. To address these difficulties, square kernel-based encoder-decoder architectures have been proposed and widely used, but their performance remains unsatisfactory. To further address these challenges, we present a novel double-branch encoder architecture. Our architecture is inspired by two observations. (1) Since the discrimination of the features learned via square convolutional kernels needs to be further improved, we propose utilizing nonsquare vertical and horizontal convolutional kernels in a double-branch encoder so that the features learned by both branches can be expected to complement each other. (2) Considering that spatial attention can help models to better focus on the target region in a large-sized image, we develop an attention loss to further emphasize the segmentation of small-sized targets. With the above two schemes, we develop a novel double-branch encoder-based segmentation framework for medical image segmentation, namely, Crosslink-Net, and validate its effectiveness on five datasets with experiments. The code is released at https://github.com/Qianyu1226/Crosslink-Net.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Attention , Image Processing, Computer-Assisted/methods